**Final project proposal**

(#) Student

Song Shi

(##) Motivational image

Ref

A few sentences describing how the image conforms with this year's theme: This is an artwork by Néle Azevedo called *Melting Men*. The scene perfectly depicts the power of time: melting ice, mottled stones, withering weeds... The tiny melting men also appear to be reflecting on something, perhaps on time itself. Hence, I chose this picture to fit this year's theme. The light phenomena in this scene are also significant: the rough dielectric BRDF of the ice, the heterogeneous absorbing media inside the ice, and the hair-like grass.

(##) Proposed features and points

A list of features relevant to my image that I'd like to implement, along with a tentative point breakdown which sums up to 16 points:

1. Environment Map Emitter (with importance sampling) - 4pt
2. Simple Extra Geometry (cube) - 1pt
3. Parallelization with Nanothread / C++17 Execution Policy / OpenMP - 2pt
4. Simple Extra Emitters (point light) - 1pt
5. Heterogeneous Volumetric Participating Media (with a path-tracing integrator; includes 15 points from the homogeneous milestones) - 10pt
    * Included in 5:
        * Homogeneous Scattering Participating Media
        * Homogeneous Absorbing Media (Beer's Law)
        * Homogeneous Scattering Participating Media (with a path-tracing integrator)
        * Heterogeneous Absorbing + Emissive Media (no scattering)

(##) Description of features

1. Environment Map Emitter. I treat the environment map as another type of emitter, so I add it to the emitter list during parsing. I also assume that the environment map is a sphere of infinite radius surrounding the scene. This sphere is represented by the scene itself (since the scene is a surface in our implementation), so the scene now has its own sample() and pdf() functions.

        // ... in scene.cpp
        Color3f Scene::sample(EmitterRecord& rec, const Vec2f& rv) const
        {
            HitRecord     hit;
            const Ray3f   ray = *rec.parent;
            ScatterRecord srec;
            // let the environment material importance-sample a direction
            m_background->sample(ray, hit, srec, rv, 0);
            Ray3f outRay(rec.o, srec.wo);
            // look up the environment radiance along the sampled direction
            Color3f col = background(outRay);
            rec.hit.t   = ray.infinity; // the "hit" lies at infinity
            rec.hit.mat = m_background.get();
            rec.wi      = srec.wo;
            rec.emitter = this;
            rec.pdf     = pdf(rec.o, srec.wo);
            return col / rec.pdf;
        }

        float Scene::pdf(const Vec3f& o, const Vec3f& v) const
        {
            HitRecord hit;
            return m_background->pdf(Ray3f(), v, hit);
        }


        // ...in parser.cpp
        if (j.contains("background"))
        {
            m_background = DartsFactory<Material>::create(j["background"]);
            // register the scene itself as an emitter so the envmap can be sampled
            m_emitters.add_child(make_shared<Scene>(*this));
        }
        
I also added two classes: env_texture.cpp and env_material.cpp. The texture class is trivial and almost the same as the image texture. The env_material class needs some special handling. First, in the constructor, I need to precompute sine-weighted pixel values: when converting from the solid-angle measure to the spherical-coordinate measure, a sine term appears from the Jacobian. If we are going to importance-sample the environment map in spherical coordinates, we need to take this into account. I then use these weights to build a Distribution2D for sampling discrete uv coordinates proportionally to the weighted pixel values.
    // ... in env_material.cpp
    // Precompute, for each pixel, the max RGB value weighted by sin(theta),
    // accounting for the Jacobian of the equirectangular mapping.
    float* func = new float[size.x * size.y];
    for (int y = 0; y < envTextureImage.height(); y++) {
        float theta    = ((float)y + .5f) * M_PI / (float)envTextureImage.height();
        float sinTheta = std::sin(theta);
        for (int x = 0; x < envTextureImage.width(); x++) {
            float max_in_rgbChannels = linalg::maxelem(envTextureImage(x, y));
            func[y * envTextureImage.width() + x] = max_in_rgbChannels * sinTheta;
        }
    }
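The Distribution2D built from these weights is used but not shown above. A minimal standalone sketch of such a piecewise-constant 2D distribution (a marginal CDF over rows, plus a conditional CDF within each row) might look like the following; the names and interface here are illustrative, not the actual darts class:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Piecewise-constant 2D distribution over a W x H grid of non-negative weights.
// sample() maps two uniforms in [0,1) to a continuous (u,v) in the unit square;
// pdf() returns the density in the unit-square measure (the spherical Jacobian
// is applied separately, as in the EnvMaterial::pdf above).
struct Distribution2D
{
    int w, h;
    std::vector<float> weights; // copy of the input function, for pdf()
    std::vector<float> rowCdf;  // marginal CDF over rows, h+1 entries
    std::vector<float> condCdf; // per-row conditional CDFs, h rows of w+1 entries
    float total;

    Distribution2D(const float* func, int w_, int h_)
        : w(w_), h(h_), weights(func, func + w_ * h_),
          rowCdf(h_ + 1, 0.f), condCdf(h_ * (w_ + 1), 0.f)
    {
        for (int y = 0; y < h; ++y)
        {
            float* cdf = &condCdf[y * (w + 1)];
            for (int x = 0; x < w; ++x)
                cdf[x + 1] = cdf[x] + func[y * w + x];
            float rowSum = cdf[w];
            rowCdf[y + 1] = rowCdf[y] + rowSum;
            if (rowSum > 0) // normalize this row's conditional CDF
                for (int x = 1; x <= w; ++x)
                    cdf[x] /= rowSum;
        }
        total = rowCdf[h];
        for (int y = 1; y <= h; ++y)
            rowCdf[y] /= total;
    }

    // r1 selects the row (v), r2 selects the column within that row (u).
    // Assumes the selected row/cell has nonzero weight.
    std::pair<float, float> sample(float r1, float r2) const
    {
        int y = 0;
        while (y + 1 < h && rowCdf[y + 1] <= r1) ++y;
        const float* cdf = &condCdf[y * (w + 1)];
        int x = 0;
        while (x + 1 < w && cdf[x + 1] <= r2) ++x;
        // continuous offset inside the chosen cell (piecewise-constant reconstruction)
        float dv = (r1 - rowCdf[y]) / (rowCdf[y + 1] - rowCdf[y]);
        float du = (r2 - cdf[x]) / (cdf[x + 1] - cdf[x]);
        return { (x + du) / w, (y + dv) / h };
    }

    // Density at (u,v) in the unit-square measure: weight / average weight.
    float pdf(float u, float v) const
    {
        int x = std::min((int)(u * w), w - 1);
        int y = std::min((int)(v * h), h - 1);
        return weights[y * w + x] * (float)(w * h) / total;
    }
};
```

A binary search would replace the linear CDF scans in a real implementation; the linear version keeps the sketch short.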

For the pdf, there are two conversions: first from the unit-square 2D distribution to the rectangular image domain, then from the rectangle to spherical coordinates. This is where the denominator comes from.

    float EnvMaterial::pdf(const Ray3f& ray, const Vec3f& scattered, const HitRecord& hit) const
    {
        // without a precomputed distribution, fall back to uniform sphere sampling
        if (m_distribution2D == nullptr)
            return INV_FOURPI;
        Vec3f dir = normalize(scattered);
        // flip handedness so the direction matches the equirectangular lookup
        Vec3f dir_rh_tocenter(dir.x, dir.y, -dir.z);
        Vec2f uv = Spherical::direction_to_equirectangular(dir_rh_tocenter);
        // convert the unit-square pdf to a solid-angle pdf
        return m_distribution2D->pdf(uv) / (2 * M_PI * M_PI * std::sin(uv.y * M_PI));
    }
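For reference, the denominator in that pdf follows from the change of variables between the unit square and the sphere:

```latex
u = \frac{\phi}{2\pi}, \quad v = \frac{\theta}{\pi},
\qquad
\mathrm{d}\omega = \sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi
                 = 2\pi^2 \sin\theta \,\mathrm{d}u \,\mathrm{d}v
\;\;\Longrightarrow\;\;
p(\omega) = \frac{p(u, v)}{2\pi^2 \sin\theta}.
```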

There are some validation tests. Sine-weighted texture:

[Figure: sine-weighted texture vs. the original.]

Comparing Blender and my renderer: there are some differences due to the png/exr mismatch of the environment map source, and some due to different light settings (sphere light), but the results are basically the same.

[Figure: Blender vs. mine.]
For MIS importance sampling, the only addition needed is the handling of rays that escape to the environment: when sampling the light, if the shadow ray does not hit anything, compute the MIS light pdf from the envmap; when sampling the BRDF, if the outgoing ray does not hit anything, compute the MIS light pdf from the envmap instead of using 0 (which is what we would do if all lights had finite shapes).

2. Simple Extra Geometry (cube). Similar to Box3f; I added cube.cpp.

Validation:
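The escape-ray MIS fallback from feature 1 can be sketched standalone; the function name and parameters below are hypothetical stand-ins, not the actual darts API:

```cpp
#include <cassert>
#include <cmath>

// Balance heuristic for the BRDF-sampling branch. When the BRDF-sampled ray
// escapes the scene, the light pdf is taken from the environment map instead
// of being set to zero, so the environment emitter still gets a proper weight.
float mis_weight_brdf(float pdf_brdf, bool ray_hit_surface,
                      float pdf_light_surface, float pdf_light_envmap)
{
    float pdf_light = ray_hit_surface ? pdf_light_surface : pdf_light_envmap;
    return pdf_brdf / (pdf_brdf + pdf_light);
}
```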
3. Parallelization. When generating rays, the per-pixel loop is distributed across threads: each pixel becomes one work item.
    // ... in scene.cpp
    drjit::blocked_range<int> range(0, image.width() * image.height());

    // Use parallel_for to process each pixel in parallel
    drjit::parallel_for(range, [&](const drjit::blocked_range<int>& r) {
        for (int index = r.begin(); index < r.end(); ++index) {
            int x = index % image.width();
            int y = index / image.width();

            Color3f color(0, 0, 0);
            for (int i = 0; i < m_num_samples; i++) {
                auto ray = m_camera->generate_ray(Vec2f((float)x, (float)y) + m_sampler->next2f());
                if (m_integrator != nullptr)
                    color += m_integrator->Li(*this, *m_sampler, ray, 0) / (float)m_num_samples;
                else
                    color += recursive_color(ray, 0) / (float)m_num_samples;
            }
            image(x, y) = color;
            ++progress;
        }
    });
4. Point light. For the point light, I added point.cpp as a special shape. The most important part is the sample() function. Because the pdf of hitting the point light is a delta distribution, I fill in rec.pdf with NaN. This avoids the pdf being used directly in the MIS calculation: when computing the MIS weight, the pdf and the Le term (which is also a delta distribution) cancel out, leaving the mis_weight as 1.0f.
        // ... in path_tracer_mis.cpp
        float mis_weight_light = 0;
        if (std::isnan(erec.pdf)) {
            // Delta light: the delta factors cancel,
            // (alpha * delta) / (alpha * delta + const) -> 1.0f
            // mis_weight_light = 1.0f / scene.emitters().child_prob(); // This is wrong.
            mis_weight_light = 1.0f;
        }
        // Normal case.
        else {
            mis_weight_light = std::max(0.0f, erec.pdf / (erec.pdf + hit.mat->pdf(ray, erec.wi, hit)));
        }
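The point-light sample() itself is not shown above; a simplified standalone sketch of the idea (the struct and its flat-float interface are illustrative stand-ins, not the actual darts classes) could be:

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// A point light is a delta distribution: sample() always returns the single
// direction toward the light and marks the pdf as NaN, which the MIS code
// detects and maps to a weight of 1.
struct PointLight
{
    float px, py, pz; // light position
    float ir, ig, ib; // radiant intensity

    // Fills (wix, wiy, wiz) with the unit direction from o toward the light,
    // sets pdf to NaN to flag the delta distribution, returns squared distance.
    float sample(float ox, float oy, float oz,
                 float& wix, float& wiy, float& wiz, float& pdf) const
    {
        float dx = px - ox, dy = py - oy, dz = pz - oz;
        float d2 = dx * dx + dy * dy + dz * dz;
        float d  = std::sqrt(d2);
        wix = dx / d; wiy = dy / d; wiz = dz / d;
        pdf = std::numeric_limits<float>::quiet_NaN();
        return d2;
    }

    // Incident "radiance" term: intensity with inverse-square falloff.
    void Le(float d2, float& r, float& g, float& b) const
    {
        r = ir / d2; g = ig / d2; b = ib / d2;
    }
};
```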
[Figure: Blender vs. mine.]
5. Participating Media. First I implemented homogeneous scattering and absorbing media. To do so, I made several changes to the existing code. A ray now carries the medium it is travelling in. While travelling, the ray samples its medium: if the medium is not vacuum, the sample method returns a distance drawn from the free-flight pdf. The ray also intersects the scene to get the distance to the nearest surface. If the free-flight distance is shorter than the surface distance, a medium interaction (absorption or scattering) is performed; otherwise, a surface interaction is performed. There are some other small changes: on a surface interaction, the ray now determines whether the hit surface is the boundary of a medium (via a dot product with the normal). If so, it fills in the ray's medium with the interior medium of that boundary and continues recursing without increasing the depth; the same happens when leaving the boundary. I also added materials, including a homogeneous medium material and a Henyey-Greenstein (hg) phase function, which are necessary for sampling scattering in media. For heterogeneous media, I used the null-scattering framework and NanoVDB. The only difference is that here I do not evaluate the three kinds of events (absorption, null collision, scattering) together in one step, as I did for homogeneous media; instead, I choose one event and divide by the probability of choosing that event.

Homogeneous media:
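The distance-sampling step described above can be sketched as a standalone fragment for the homogeneous case (function names are illustrative; `sigma_t` is the extinction coefficient):

```cpp
#include <cassert>
#include <cmath>

// Free-flight distance sampling in a homogeneous medium with extinction
// coefficient sigma_t, by inverting the CDF of the exponential free-flight
// pdf p(t) = sigma_t * exp(-sigma_t * t). u is uniform in [0, 1).
float sample_free_flight(float sigma_t, float u)
{
    return -std::log(1.0f - u) / sigma_t;
}

// Transmittance over a distance t (Beer's law).
float transmittance(float sigma_t, float t)
{
    return std::exp(-sigma_t * t);
}

// The decision described above: if the sampled medium distance is shorter
// than the distance to the nearest surface, perform a medium interaction
// (absorption/scattering); otherwise perform a surface interaction.
bool medium_interaction(float t_medium, float t_surface)
{
    return t_medium < t_surface;
}
```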
Heterogeneous media:

[Figure: Blender vs. mine.]

Final scene: